Feature/ia#91
Conversation
Introduces a new Django "assistant" app and a frontend chat UI. Adds models (Conversation, Message, CuratedDocument) with pgvector VectorFields and an initial migration that creates the vector extension. Implements services for embeddings (HuggingFace/sentence-transformers) and LLM-based chat (LangChain/OpenAI), plus tool functions to query platform data. Exposes REST endpoints and views (chat, conversations) and admin integration that auto-generates embeddings. Adds React components and CSS for a floating chat widget (ChatWidget, ChatPanel, ConversationList, MessageThread) and integrates URLs/templates. Updates settings with assistant-related, environment-backed configuration and includes the assistant routes. Adds runtime dependencies to config/requirements.txt (langchain, pgvector, sentence-transformers, openai, etc.) and switches the docker-compose DB image to pgvector/pgvector:pg16 to enable vector support. Also adds SonarLint connectedMode config.
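As a rough orientation for the data-model changes described above, a pgvector-backed model plus the extension-creating migration could look like the sketch below. This is not the code from the diff: the field names and the 384-dimension embedding size are assumptions.

```python
# models.py (sketch) -- field names and the 384-dim size are assumptions
from django.conf import settings
from django.db import models
from pgvector.django import VectorField


class Conversation(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    created_at = models.DateTimeField(auto_now_add=True)


class Message(models.Model):
    conversation = models.ForeignKey(Conversation, related_name='messages', on_delete=models.CASCADE)
    role = models.CharField(max_length=16)  # 'user' or 'assistant'
    content = models.TextField()


class CuratedDocument(models.Model):
    title = models.CharField(max_length=255)
    content = models.TextField()
    # sentence-transformers models such as all-MiniLM-L6-v2 emit 384-dim vectors
    embedding = VectorField(dimensions=384, null=True)
```

```python
# migrations/0001_initial.py (sketch) -- the vector extension must exist
# before any table containing a VectorField is created
from django.db import migrations
from pgvector.django import VectorExtension


class Migration(migrations.Migration):
    initial = True
    operations = [VectorExtension()]  # followed by the usual CreateModel operations
```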
Adds comprehensive docs (AI_ASSISTANT.md) and integrates STRING support: three new tools in assistant.services.tools (interaction partners, functional enrichment, network image URL), registered in make_tools. Updates the llm_service prompt to mention markdown rendering and embedding STRING images. Frontend: adds react-markdown and remark-gfm dependencies, renders assistant messages as markdown (with GFM) in MessageThread, adds a click-to-enlarge lightbox for images, and implements a resizable/persistent ChatPanel with resize handles. Adds CSS for the lightbox, markdown styling, and the resize UI.
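For context on what registering a STRING tool looks like, here is a minimal, hypothetical sketch of one such tool. The function name, parameters, and exact URL are illustrative assumptions; the real implementations live in assistant.services.tools.

```python
# Hypothetical sketch of a STRING "network image URL" tool and its registration.
from langchain_core.tools import tool


@tool
def string_network_image_url(protein: str, species: int = 9606) -> str:
    """Build a STRING network image URL so the assistant can embed it as markdown."""
    return (
        'https://string-db.org/api/image/network'
        f'?identifiers={protein}&species={species}'
    )


def make_tools():
    # The real make_tools also returns the platform-data tools added in this PR
    return [string_network_image_url]
```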
return Response({'error': 'message is required'}, status=status.HTTP_400_BAD_REQUEST)

if conversation_id:
    try:
Replace this try/except with get_object_or_404.
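One way to apply the suggestion, assuming the lookup filters by primary key and by the requesting user (a sketch, not the final code):

```python
from django.shortcuts import get_object_or_404

if conversation_id:
    # Raises Http404 (-> 404 response) instead of a hand-rolled try/except
    conversation = get_object_or_404(Conversation, pk=conversation_id, user=request.user)
else:
    conversation = Conversation.objects.create(user=request.user)
```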
else:
    conversation = Conversation.objects.create(user=request.user)

from .services.llm_service import run_chat
Why is this here? If it is to avoid a circular dependency, add a comment above it saying so.
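If the local import really is there to break a circular dependency, the requested change amounts to something like this (the exact wording of the comment is up to the author):

```python
# Imported lazily inside the view to avoid a circular import
# between the views module and assistant.services.llm_service
from .services.llm_service import run_chat
```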
    **({"huggingface_api_token": hf_token} if hf_token else {}),
)
# Silence future hub warnings once the model is loaded
os.environ.setdefault('TOKENIZERS_PARALLELISM', 'false')
- All of these environment variables should be read in settings.py; import them here via `from django.conf import settings` and use them as `settings.TOKENIZERS_PARALLELISM` (and so on).
- Document all of the environment variables in the DEPLOYING.md file.
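A minimal sketch of the pattern being requested, assuming the setting keeps the same name as the environment variable:

```python
# settings.py (sketch): read the value from the environment in one place
import os

TOKENIZERS_PARALLELISM = os.environ.get('TOKENIZERS_PARALLELISM', 'false')
```

```python
# embedding_service.py (sketch): consume it through django.conf.settings
import os

from django.conf import settings

os.environ.setdefault('TOKENIZERS_PARALLELISM', settings.TOKENIZERS_PARALLELISM)
```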
@staticmethod
def _is_cached(model_name: str) -> bool:
    """Check if the model weights are already in the local HuggingFace cache."""
    import os
Move this to the top of the file; tell Claude that imports always go at the top, as they have since 1980 🙏
def get_llm():
    """Returns the configured LLM. Swap this function to change provider."""
    from langchain_openai import ChatOpenAI
const STORAGE_KEY = 'multiomix_chat_open'

const ChatWidget = () => {
All of the comments on src/frontend/static/frontend/src/components/assistant/ChatPanel.tsx also apply here.
    onDelete: (id: number) => void
}

const ConversationList = ({ conversations, activeId, onSelect, onNew, onDelete }: ConversationListProps) => {
All of the comments on src/frontend/static/frontend/src/components/assistant/ChatPanel.tsx also apply here.
    onSend: (text: string) => void
}

const MessageThread = ({ messages, isLoading, onSend }: MessageThreadProps) => {
All of the comments on src/frontend/static/frontend/src/components/assistant/ChatPanel.tsx also apply here.
# Value used to indicate that data is not present in a dataset
NON_DATA_VALUE: str = 'NA'

# AI Assistant settings
Document all of these variables in the DEPLOYING.md file.
@@ -0,0 +1,532 @@
# Multiomix AI Assistant
Multiomix is an international open-source project. Rewrite this document in English.
AI assistant integrated into Multiomix
An LLM-based conversational assistant is added as a floating widget available on every page of the platform (authenticated users only).
What was added
New Django app: assistant
Embedding service (embedding_service.py)
LLM agent (llm_service.py)
Curated documentation
React frontend
New dependencies
langchain, langchain-openai, langchain-community, sentence-transformers, pgvector, torch (CPU).
Required environment variables
Deployment notes